Cybersecurity teams preferring human results shows mistrust in AI
Mistrust in artificial intelligence (AI) continues to manifest itself as the emerging technology spreads across the tech world, and cybersecurity is no exception. A report released today by WhiteHat Security has revealed that while over half of surveyed organisations use AI or machine learning in their security stack, nearly 60% remain more confident in cyberthreat findings verified by humans than by AI. The research is based on a survey of 102 industry professionals at RSA Conference 2020. The survey also found that 75% of respondents use application security tools as part of their security infrastructure, and that 40% of those tools use a hybrid AI and human-based verification system. WhiteHat says the combination of advancing, increasingly numerous security threats and the technology talent gap has made AI and machine learning tools essential to security protocols.
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.40)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.81)